
    Efficient Probabilistic Subsumption Checking for Content-Based Publish/Subscribe Systems

    Abstract. Efficient subsumption checking, deciding whether a subscription or publication is covered by a set of previously defined subscriptions, is of paramount importance for publish/subscribe systems. It provides the core system functionality (matching publications to subscriber needs expressed as subscriptions) and additionally reduces the overall system load and generated traffic, since covered subscriptions are not propagated in distributed environments. As the subsumption problem was previously shown to be co-NP complete and existing solutions typically apply pairwise comparisons to detect the subsumption relationship, we propose a Monte Carlo-type probabilistic algorithm for the general subsumption problem. It determines whether a publication/subscription is covered by a disjunction of subscriptions in O(k m d), where k is the number of subscriptions, m is the number of distinct attributes in subscriptions, and d is the number of tests performed to answer a subsumption question. The probability of error is problem-specific and typically very small, and sets an upper bound on d. Our experimental results show significant gains in terms of subscription set reduction, which has a favorable impact on overall system performance as it reduces total computational cost and network traffic. Furthermore, the expected theoretical bounds underestimate the algorithm's performance: it performs much better in practice due to the introduced optimizations, and is adequate for fast forwarding of subscriptions under high subscription rates.
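    The abstract describes a Monte Carlo-type test with one-sided error: sample random points from the region a new subscription covers and check whether each point is matched by some existing subscription. Below is a minimal Python sketch of that idea, not the paper's implementation; the range-based subscription model, the assumption that all subscriptions constrain the same attributes, and all function names are illustrative.

        import random

        # Hypothetical model (illustration only): a subscription is a conjunction
        # of closed ranges over the same m named attributes, e.g.
        # {"price": (0, 60), "volume": (0, 50)}.

        def sample_point(sub):
            # Draw a uniform random point from the region the subscription covers.
            return {attr: random.uniform(lo, hi) for attr, (lo, hi) in sub.items()}

        def matches(sub, point):
            # A point satisfies a subscription if it lies in every attribute range.
            return all(lo <= point[attr] <= hi for attr, (lo, hi) in sub.items())

        def probably_subsumed(new_sub, subs, d=1000):
            # One-sided Monte Carlo test, O(k * m * d) for k subscriptions over
            # m attributes: a False answer is certain (a sampled witness point
            # escapes every covering subscription); a True answer means no
            # counterexample was found in d trials, so "probably covered".
            for _ in range(d):
                point = sample_point(new_sub)
                if not any(matches(s, point) for s in subs):
                    return False
            return True

        # Example: the two covering subscriptions jointly span the candidate,
        # so the test returns True.
        covering = [{"price": (0, 60), "volume": (0, 50)},
                    {"price": (0, 60), "volume": (50, 100)}]
        candidate = {"price": (10, 50), "volume": (20, 80)}
        print(probably_subsumed(candidate, covering))

    A False answer lets a broker forward the subscription immediately, while a True answer lets it drop the covered subscription, which is where the reported reduction in traffic and computation comes from.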

    Fast Probabilistic Subsumption Checking for Publish/Subscribe Systems

    Efficient subsumption checking, deciding whether a subscription or publication is subsumed (covered) by a set of previously defined subscriptions, is of paramount importance for publish/subscribe systems. It provides the core system functionality and additionally reduces the overall system load and generated traffic in distributed environments. As the deterministic solution was previously shown to be co-NP complete and existing solutions typically employ costly pairwise comparisons to detect the subsumption relationship, we propose a probabilistic algorithm for the general subsumption problem. It efficiently determines whether a publication/subscription is covered by a disjunction of subscriptions in O(k m d), where k is the number of subscriptions, m is the number of distinct attributes in subscriptions, and d is the number of tests performed to answer a subsumption question. The probability of error is problem-specific and typically very small, and determines an upper bound on d in polynomial time prior to the algorithm's execution. Our experimental results demonstrate that the algorithm performs even better in practice due to the introduced optimizations, and is adequate for fast forwarding of publications/subscriptions, especially in resource-scarce environments, e.g., sensor networks.
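    This abstract adds that an upper bound on d can be fixed before running the algorithm from the acceptable error probability. One standard way to derive such a bound (an illustration, not necessarily the paper's problem-specific derivation): if at least a fraction eps of the candidate's region escapes the covering set, each uniform sample misses that witness region with probability at most 1 - eps, so all d samples miss it with probability at most (1 - eps)^d <= e^(-eps*d). The helper below picks d to push that false-"covered" probability below a target delta.

        import math

        def required_trials(eps, delta):
            # Smallest d with e^(-eps * d) <= delta, hence (1 - eps)^d <= delta:
            # enough trials to expose any uncovered fraction >= eps with
            # probability at least 1 - delta.
            return math.ceil(math.log(1.0 / delta) / eps)

        # e.g. an uncovered fraction of at least 1% detected with error below 1e-6:
        print(required_trials(0.01, 1e-6))  # 1382 tests per subsumption question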

    Semantic Interoperability in Global Information Systems

    The Internet, the Web, and distributed computing infrastructures continue to gain popularity as a means of communication for organizations, groups, and individuals alike. In such an environment, characterized by large, distributed, autonomous, diverse, and dynamic information sources, access to relevant and accurate information is becoming increasingly complex. This complexity is exacerbated by the evolving system, semantic, and structural heterogeneity of these potentially global, cross-disciplinary, multicultural, and rich-media technologies. Clearly, solutions to these challenges require directly addressing a variety of interoperability issues.

    PWSMS: A Peer-to-Peer Web service Management System for Data Sharing in Collaborative Environments


    A Semantic Web Service Management System for Data Sharing in Collaborative Environments


    Composing and Optimizing Data Providing Web Services


    Third International Workshop on Databases, Information Systems and Peer-to-Peer Computing (DBISP2P 2005)

    The aim of this third workshop is to explore the promise of P2P to offer exciting new possibilities in distributed information processing and database technologies. The realization of this promise lies fundamentally in the availability of enhanced services such as structured ways of classifying and registering shared information, verification and certification of information, content distribution schemes and quality of content, security features, information discovery and accessibility, interoperation and composition of active information services, and finally market-based mechanisms to allow cooperative and non-cooperative information exchanges. The P2P paradigm lends itself to constructing large-scale, complex, adaptive, autonomous, and heterogeneous database and information systems, endowed with clearly specified and differential capabilities to negotiate, bargain, coordinate, and self-organize information exchanges in large-scale networks. This vision will have a radical impact on the structure of complex organizations (business, scientific, or otherwise), on the emergence and formation of social communities, and on how information is organized and processed.

    The P2P information paradigm naturally encompasses static and wireless connectivity, and static and mobile architectures. Wireless connectivity, combined with increasingly small and powerful mobile devices and sensors, poses new challenges as well as opportunities for the database community. Information becomes ubiquitous, highly distributed, and accessible anywhere and at any time over highly dynamic, unstable networks with very severe constraints on information management and processing capabilities. What techniques and data models may be appropriate for this environment, yet guarantee or approach the performance, versatility, and capability that users and developers have come to enjoy in traditional static, centralized, and distributed database environments? Is there a need to define new notions of consistency, durability, and completeness, for example?

    The workshop builds on the success of the two preceding editions at VLDB 2003 and 2004. It concentrates on exploring the synergies between current database research and P2P computing. It is our belief that database research has much to contribute to the P2P grand challenge through its wealth of techniques for sophisticated semantics-based data models, new indexing algorithms and efficient data placement, query processing techniques, and transaction processing. Database technologies in the new information age will form the crucial components of the first generation of complex adaptive P2P information systems, which will be characterized by their ability to continuously self-organize, adapt to new circumstances, promote emergence as an inherent property, optimize locally but not necessarily globally, and deal with approximation and incompleteness. The workshop will also concentrate on the impact of complex adaptive information systems on current database technologies and their relation to emerging industrial technologies such as IBM's autonomic computing initiative.

    The workshop will be co-located with VLDB, the major international database and information systems conference, and will bring together key researchers from all over the world working on databases and P2P computing, with the intention of strengthening this connection. Researchers from related areas such as distributed systems, networks, multi-agent systems, and complex systems will also be invited.